    Locomotor illusions are generated by perceptual body-environment organization

    While one is walking, the stimulation by one's body forms a structure with the stimulation by the environment. This locomotor array of stimulation corresponds to the human-environment relation that one's body forms with the environment it is moving through. Thus, the perceptual experience of walking may arise from such a locomotor array of stimulation. Humans can also experience walking while they are sitting. In this case, there is no stimulation by one's walking body. Hence, one can experience walking even though a basic component of a locomotor array of stimulation is missing. This may be facilitated by perception organizing the sensory input about one's body and environment into a perceptual structure that corresponds to a locomotor array of stimulation. We examined whether locomotor illusions are generated by this perceptual formation of a locomotor structure. We exposed sixteen seated individuals to environmental stimuli that elicited either the perceptual formation of a locomotor structure or that of a control structure. The participants experienced distinct locomotor illusions when they were presented with environmental stimuli that elicited the perceptual formation of a locomotor structure. They did not experience distinct locomotor illusions when the stimuli instead elicited the perceptual formation of the control structure. These findings suggest that locomotor illusions are generated by the perceptual organization of sensory input about one's body and environment into a locomotor structure. This perceptual body-environment organization elucidates why seated individuals experience the sensation of walking without any proprioceptive or kinaesthetic stimulation.

    Affective Interaction in Smart Environments

    We present a concept in which the smart environments of the future will be able to provide ubiquitous affective communication. All surfaces will become interactive and furniture will display emotions. In particular, we present a first prototype that allows people to share their emotional states in a natural way. Input is given through facial expressions, and output is displayed in a context-aware, multimodal way. Two novel output modalities are presented: a robotic painting that applies the concept of affective communication to informative art, and an RGB lamp that displays emotions while remaining in the user's peripheral attention. An observation study was conducted during an interactive event, and we report our preliminary findings in this paper.
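    The abstract does not specify how the RGB lamp renders emotional states; a minimal sketch of one plausible approach is below. The emotion labels, colour values, and intensity scaling are illustrative assumptions, not the authors' actual mapping.

```python
# Hypothetical emotion-to-colour mapping for an ambient RGB lamp.
# Labels and RGB values are illustrative assumptions for this sketch.
EMOTION_COLOURS = {
    "joy": (255, 200, 0),      # warm yellow
    "sadness": (0, 80, 200),   # cool blue
    "anger": (220, 30, 30),    # red
    "calm": (40, 180, 120),    # green
}

def lamp_colour(emotion: str, intensity: float) -> tuple:
    """Scale the base colour by intensity in [0, 1], so a weak signal
    stays dim and remains in the user's peripheral attention."""
    r, g, b = EMOTION_COLOURS.get(emotion, (128, 128, 128))  # grey fallback
    return (round(r * intensity), round(g * intensity), round(b * intensity))
```

    The low-saturation fallback and intensity scaling reflect the design goal stated in the abstract: the display should inform without demanding focused attention.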

    Internet of Tangibles: Exploring the interaction-attention continuum

    There is increasing interest in the HCI research community in designing richer user interactions with the Internet of Things (IoT). This studio will allow participants to explore the design of tangible interaction with the IoT, which we call the Internet of Tangibles. In particular, we aim to investigate the full interaction-attention continuum, with the purpose of designing tangible IoT interfaces that can switch between peripheral interactions, which do not disrupt everyday routines, and focused interactions, which support the user's reflection. This investigation will be conducted through hands-on activities in which participants prototype tangible IoT objects, starting with a paper-prototyping phase supported by design cards and followed by an Arduino prototyping phase. The purpose of the studio is also to establish a community of researchers and practitioners, from both academia and industry, interested in the field of tangible interaction with the Internet of Things.

    Matching optical flow to motor speed in virtual reality while running on a treadmill

    We investigated how visual and kinaesthetic/efferent information is integrated for speed perception in running. Twelve moderately trained to trained subjects ran on a treadmill at three different speeds (8, 10, 12 km/h) in front of a moving virtual scene. They were asked to match the visual speed of the scene to their running speed, i.e., the treadmill's speed. For each trial, participants indicated whether the scene was moving slower or faster than they were running. Visual speed was adjusted according to their response using a staircase procedure until the Point of Subjective Equality (PSE) was reached, i.e., until visual and running speed were perceived as equivalent. For all three running speeds, participants systematically underestimated the visual speed relative to their actual running speed. Indeed, the speed of the visual scene had to exceed the actual running speed in order to be perceived as equivalent to the treadmill speed. The underestimation of visual speed was speed-dependent, and the percentage of underestimation relative to running speed ranged from 15% at 8 km/h to 31% at 12 km/h. We suggest that this fact should be taken into consideration to improve the design of attractive treadmill-mediated virtual environments that enhance engagement in physical activity for healthier lifestyles and disease prevention and care.
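    The adaptive procedure described above can be sketched as a simple 1-up/1-down staircase that converges on the PSE. The simulated observer, step sizes, and stopping rule below are illustrative assumptions, not the study's exact protocol.

```python
# Sketch of a 1-up/1-down staircase converging on the Point of
# Subjective Equality (PSE). Step sizes and the stopping rule are
# illustrative assumptions.
def staircase_pse(respond_faster, start=10.0, step=1.0, reversals_needed=8):
    """respond_faster(visual_speed) -> True if the scene looks faster
    than the running speed. Visual speed is lowered after 'faster'
    responses and raised after 'slower' ones; the PSE is estimated as
    the mean of the reversal points."""
    speed = start
    last = None
    reversal_points = []
    while len(reversal_points) < reversals_needed:
        faster = respond_faster(speed)
        if last is not None and faster != last:
            reversal_points.append(speed)       # response flipped: reversal
            step = max(step / 2, 0.1)           # shrink step after reversals
        speed += -step if faster else step
        last = faster
    return sum(reversal_points) / len(reversal_points)

# Simulated observer running at 12 km/h who only judges the scene as
# faster once it exceeds ~14 km/h (i.e., underestimates visual speed):
pse = staircase_pse(lambda v: v > 14.0)
```

    For this simulated observer the estimate settles near 14 km/h, mirroring the abstract's finding that the visual scene had to move faster than the treadmill to be perceived as equivalent.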

    Classification of Drivers' Workload Using Physiological Signals in Conditional Automation

    The use of automation in cars is increasing. In future vehicles, drivers will no longer be in charge of the main driving task and may be allowed to perform a secondary task. However, they might be requested to regain control of the car if a hazardous situation occurs (i.e., conditionally automated driving). Performing a secondary task might increase drivers' mental workload and consequently decrease takeover performance if the workload level exceeds a certain threshold. Knowledge about the driver's mental state might hence be useful for increasing safety in conditionally automated vehicles. Measuring drivers' workload continuously is essential to support the driver and hence limit the number of accidents in takeover situations. This goal can be achieved using machine learning techniques to evaluate and classify drivers' workload in real time. To evaluate the usefulness of physiological data as an indicator of workload in conditionally automated driving, three physiological signals from 90 subjects were collected during 25 min of automated driving in a fixed-base simulator. Half of the participants performed a verbal cognitive task to induce mental workload, while the other half only had to monitor the environment of the car. Three classifiers, sensor-fusion strategies, and levels of data segmentation were compared. Results show that the best model was able to successfully classify the condition of the driver with an accuracy of 95%. In some cases, the model benefited from sensor fusion. Increasing the segmentation level (i.e., the size of the time window used to compute physiological indicators) increased the performance of the model for windows smaller than 4 min, but decreased it for windows larger than 4 min. In conclusion, the study showed that a high level of driver mental workload can be accurately detected during conditionally automated driving based on 4-min recordings of respiration and skin conductance.
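    The classification setup described above can be sketched as follows: window-level physiological features feed a classifier that separates the two experimental conditions. The feature choices, synthetic data, and use of a random forest here are illustrative assumptions; the abstract does not state which of the three compared classifiers performed best.

```python
# Illustrative sketch (not the study's code): classifying driver
# condition from window-level physiological features.
# Features and synthetic data are assumptions for demonstration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
# Two hypothetical features per 4-min window:
# mean skin conductance (muS) and respiration rate (breaths/min).
low  = rng.normal([2.0, 14.0], [0.5, 1.5], size=(n, 2))  # monitoring only
high = rng.normal([3.0, 17.0], [0.5, 1.5], size=(n, 2))  # verbal task
X = np.vstack([low, high])
y = np.array([0] * n + [1] * n)  # 0 = low workload, 1 = high workload

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

    Cross-validated accuracy is the same headline metric the study reports (95% for its best model); the synthetic separation here is only meant to make the pipeline runnable.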

    Relevant Physiological Indicators for Assessing Workload in Conditionally Automated Driving, Through Three-Class Classification and Regression

    In future conditionally automated driving, drivers may be asked to take over control of the car while it is driving autonomously. Performing a non-driving-related task could degrade their takeover performance, which could be detected by continuous assessment of drivers' mental load. In this regard, three physiological signals from 80 subjects were collected during 1 h of conditionally automated driving in a simulator. Participants were asked to perform a non-driving cognitive task (N-back) for 90 s, 15 times during driving. The modality and difficulty of the task were experimentally manipulated. The experiment yielded a dataset of drivers' physiological indicators during the task sequences, which was used to predict drivers' workload. This was done by classifying task difficulty (three classes) and regressing participants' reported level of subjective workload after each task (on a 0–20 scale). Classification of task modality was also studied. For each task, the effects of sensor fusion and task performance were studied. The implemented pipeline consisted of a repeated cross-validation approach with grid search applied to three machine learning algorithms. The results showed that three different levels of mental load could be classified with an F1-score of 0.713 using the skin conductance and respiration signals as inputs to a random forest classifier. The best regression model predicted the subjective level of workload with a mean absolute error of 3.195 using the three signals. The accuracy of the model increased with participants' task performance. However, classification of task modality (visual or auditory) was not successful. Some physiological indicators, such as estimates of respiratory sinus arrhythmia, respiratory amplitude, and temporal indices of heart rate variability, were found to be relevant measures of mental workload. Their use should be preferred for ongoing assessment of driver workload in automated driving.
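    The pipeline shape described above (repeated cross-validation with grid search, three-class difficulty classification, macro-F1 scoring, random forest) can be sketched as below. The synthetic features, grid values, and fold counts are illustrative assumptions, not the study's actual configuration.

```python
# Sketch of a repeated-CV + grid-search pipeline for three-class
# workload classification, as the abstract describes. Data and grid
# values are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, RepeatedStratifiedKFold

rng = np.random.default_rng(1)
n = 120
# Synthetic skin-conductance / respiration features for three N-back
# difficulty levels (0, 1, 2), increasingly separated:
X = np.vstack([rng.normal([k, k], 0.8, size=(n, 2)) for k in range(3)])
y = np.repeat([0, 1, 2], n)

cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=3, random_state=1)
search = GridSearchCV(
    RandomForestClassifier(random_state=1),
    param_grid={"n_estimators": [50, 100], "max_depth": [3, None]},
    scoring="f1_macro",   # macro-averaged F1, suitable for three classes
    cv=cv,
)
search.fit(X, y)
print(f"best macro-F1: {search.best_score_:.3f}")
```

    Repeating the folds stabilises the score used for model selection, which matters when, as here, each participant contributes only a handful of 90-s task windows.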

    Effect of Obstacle Type and Cognitive Task on Situation Awareness and Takeover Performance in Conditionally Automated Driving

    In conditionally automated driving, several factors can affect the driver's situation awareness and ability to take over control. To better understand the influence of some of these factors, 88 participants spent 20 minutes in a conditionally automated driving simulator. They had to react to four obstacles that varied in danger and movement. Half of the participants were required to engage in a verbal cognitive non-driving-related task. Situation awareness, takeover performance, and physiological responses were measured for each situation. The results suggest that obstacle movement influences obstacle danger perception, situation awareness, and response time, while the latter is also influenced by obstacle danger. The verbal cognitive task also had an effect on takeover response time. These results imply that the driver's cognitive state and the driving situation (e.g., the movement and danger of entities around the vehicle) must be considered when conveying information to drivers through in-vehicle interfaces.

    Gesturing on the steering wheel, a comparison with speech and touch interaction modalities

    This paper compares an emergent interaction modality for the In-Vehicle Infotainment System (IVIS), i.e., gesturing on the steering wheel, with two more popular modalities in modern cars: touch and speech. We conducted a between-subjects experiment with 20 participants per modality to assess interaction performance with the IVIS and the impact on driving performance. Moreover, we compared the three modalities in terms of usability, subjective workload, and emotional response. The results showed no statistically significant differences between the three interaction modalities on the various indicators of driving-task performance, while significant differences were found in measures of IVIS interaction performance: users performed fewer interactions to complete the secondary tasks with the speech modality, while, on average, a lower task completion time was recorded with the touch modality. The three interfaces were comparable on all subjective metrics.